Video paragraph captioning aims to generate a multi-sentence description of an untrimmed video containing several temporal event locations, in a coherent storytelling manner. Following the human perception process, in which a scene is effectively understood by decomposing it into visual (e.g., human, animal) and non-visual components (e.g., action, relations) under the mutual influence of vision and language, we first propose a visual-linguistic (VL) feature. In the proposed VL feature, the scene is modeled by three modalities: (i) a global visual environment; (ii) local visual main agents; and (iii) linguistic scene elements. We then introduce an autoregressive Transformer-in-Transformer (TinT) to simultaneously capture the semantic coherence of intra- and inter-event contents within a video. Finally, we present a new VL contrastive loss function to ensure that the learned embedding features match the caption semantics. Comprehensive experiments and extensive ablation studies on the ActivityNet Captions and YouCookII datasets show that the proposed Visual-Linguistic Transformer-in-Transformer (VLTinT) outperforms prior state-of-the-art methods in accuracy and diversity.
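To make the VL contrastive objective concrete, here is a minimal sketch assuming pooled event features and caption embeddings of equal dimensionality; the function name, temperature value, and symmetric InfoNCE form are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a symmetric InfoNCE-style visual-linguistic contrastive
# loss; matched event/caption pairs share the same batch index. All names
# and the temperature value are illustrative assumptions.
import torch
import torch.nn.functional as F

def vl_contrastive_loss(vl_feats, cap_feats, temperature=0.07):
    """vl_feats, cap_feats: (batch, dim) tensors of event and caption embeddings."""
    vl = F.normalize(vl_feats, dim=-1)
    cap = F.normalize(cap_feats, dim=-1)
    logits = vl @ cap.t() / temperature                # (batch, batch) similarities
    targets = torch.arange(vl.size(0), device=vl.device)
    # Symmetric cross-entropy: match events to captions and captions to events.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```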
Wildfire prediction has been one of the most critical tasks for the flourishing of humankind, playing a vital role in protecting human life. On the other hand, wildfire prediction is difficult because of its stochastic and chaotic properties. We tackle the problem by interpreting a sequence of wildfire images as a video and using it to anticipate how the fire will behave in the future. However, creating video prediction models that account for the inherent uncertainty of the future is challenging. Most published attempts are based on stochastic image-autoregressive recurrent networks, which raise various performance and application difficulties, such as computational cost and limited efficiency on massive datasets. Another possibility is to use fully latent temporal models that combine frame synthesis with temporal dynamics. However, owing to design and training issues, no such model for stochastic video prediction has yet been proposed in the literature. This paper addresses these issues by introducing a novel stochastic temporal model whose dynamics are driven in a latent space. It naturally predicts video dynamics, and by allowing a lighter and more interpretable latent model, it beats prior state-of-the-art methods on the GOES-16 dataset. Results are compared with various benchmark models.
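As an illustration of the latent-space dynamics idea, the following is a schematic sketch under our reading of the abstract; the module sizes, noise scale, and residual update rule are assumptions, and `encoder`/`decoder` stand in for unspecified networks.

```python
# Schematic sketch (not the paper's model): frames are encoded once,
# dynamics are rolled out entirely in latent space with Gaussian noise,
# and predicted frames are decoded at the end.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.step = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                  nn.Linear(128, latent_dim))

    def forward(self, z, noise_scale=0.1):
        # Residual update driven in latent space, plus stochastic noise.
        return z + self.step(z) + noise_scale * torch.randn_like(z)

def rollout(encoder, dynamics, decoder, frame, horizon=8):
    """Encode one observed frame, roll the latent forward, decode predictions."""
    z = encoder(frame)
    preds = []
    for _ in range(horizon):
        z = dynamics(z)
        preds.append(decoder(z))
    return torch.stack(preds, dim=1)  # (batch, horizon, ...) predicted frames
```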
Quantum neural networks are promising for a wide range of applications in the noisy intermediate-scale quantum (NISQ) era. As such, there is a growing demand for automatic quantum neural architecture search. We tackle this challenge by designing a quantum circuit metric for Bayesian optimization with Gaussian processes. To achieve this, we propose a new quantum gate distance that characterizes a gate's action on every quantum state, and we provide a theoretical perspective on its geometric properties. Our approach significantly outperforms the benchmarks on three empirical quantum machine learning problems, including training a quantum generative adversarial network, solving combinatorial optimization in the MaxCut problem, and simulating the quantum Fourier transform. Our method can be extended to characterize the behavior of various quantum machine learning models.
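A toy sketch of the overall loop, assuming circuits are represented as gate sequences: a Gaussian-process surrogate whose kernel is built from a circuit distance, with a lower-confidence-bound acquisition. The `circuit_distance` here is a naive stand-in for the paper's quantum gate distance, and all numerical choices are assumptions.

```python
# Toy Bayesian optimization over quantum circuits with a GP whose kernel
# is derived from a circuit distance. Everything here is illustrative.
import numpy as np

def circuit_distance(c1, c2):
    # Placeholder distance between two gate sequences (lists of gate names);
    # the paper's quantum gate distance would replace this.
    return abs(len(c1) - len(c2)) + sum(g1 != g2 for g1, g2 in zip(c1, c2))

def kernel(cs_a, cs_b, length=1.0):
    d = np.array([[circuit_distance(a, b) for b in cs_b] for a in cs_a])
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(train_cs, train_y, test_cs, noise=1e-6):
    K = kernel(train_cs, train_cs) + noise * np.eye(len(train_cs))
    Ks = kernel(test_cs, train_cs)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ np.array(train_y)
    var = 1.0 - np.sum(Ks @ Kinv * Ks, axis=1)  # prior variance k(x, x) = 1
    return mean, np.maximum(var, 0.0)

def select_next(train_cs, train_y, candidates, beta=2.0):
    """Pick the candidate circuit minimizing a lower confidence bound."""
    mean, var = gp_posterior(train_cs, train_y, candidates)
    lcb = mean - beta * np.sqrt(var)
    return candidates[int(np.argmin(lcb))]
```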
In this paper, we provide a latent-variable formulation and solution for recommender systems, based on fundamental properties that we expect any reasonable solution to satisfy. Specifically, we examine a novel tensor completion method that efficiently and accurately learns the parameters of a model for the unobservable personal preferences underlying user ratings. By regularizing the tensor decomposition with a single latent invariant, we achieve three properties for a reliable recommender system: (1) uniqueness of the tensor completion result under minimal assumptions, (2) independence from arbitrary user preferences, and (3) a consensus-ordering guarantee that provides consistent ranking between observed and unobserved rating scores. Our algorithm leads to a simple and elegant recommendation framework with linear computational complexity and no hyperparameter tuning. Empirical results show that the method significantly outperforms current state-of-the-art approaches.
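For illustration only, here is a minimal masked CP-decomposition completion of a user x item x context ratings tensor, fit by stochastic gradient descent on observed entries; the plain L2 regularizer stands in for the paper's single-latent-invariant constraint and does not reproduce its guarantees.

```python
# Minimal sketch of tensor completion via CP factors trained on observed
# entries only; all hyperparameters and the L2 regularizer are assumptions.
import numpy as np

def cp_complete(T, mask, rank=8, lr=0.01, lam=0.1, iters=200, seed=0):
    """T: (I, J, K) ratings tensor; mask: boolean array of observed entries."""
    rng = np.random.default_rng(seed)
    U, V, W = (rng.normal(scale=0.1, size=(n, rank)) for n in T.shape)
    idx = np.argwhere(mask)
    for _ in range(iters):
        for i, j, k in idx:
            pred = np.sum(U[i] * V[j] * W[k])     # CP reconstruction of one entry
            err = pred - T[i, j, k]
            # Gradient of squared error plus L2 penalty for each factor row.
            gU = err * (V[j] * W[k]) + lam * U[i]
            gV = err * (U[i] * W[k]) + lam * V[j]
            gW = err * (U[i] * V[j]) + lam * W[k]
            U[i] -= lr * gU
            V[j] -= lr * gV
            W[k] -= lr * gW
    # Dense reconstruction fills in the unobserved ratings.
    return np.einsum('ir,jr,kr->ijk', U, V, W)
```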
In this paper, we leverage the human perceiving process, which involves the interaction of vision and language, to generate a coherent paragraph description of an untrimmed video. We propose visual-linguistic (VL) features consisting of two modalities: (i) a vision modality to capture the global visual content of the entire scene, and (ii) a language modality to extract scene-element descriptions of both human and non-human objects (e.g., animals, vehicles, etc.) as well as visual and non-visual elements (e.g., relations, activities, etc.). Furthermore, we propose to train our proposed VLCap with a contrastive-learning VL loss. Experiments and ablation studies on the ActivityNet Captions and YouCookII datasets show that our VLCap outperforms existing SOTA methods on both accuracy and diversity metrics.
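A hedged sketch of assembling such a two-modality feature: a global visual vector concatenated with pooled embeddings of the top-scoring scene-element phrases. The phrase vocabulary, scoring model, and pooling are illustrative assumptions rather than the authors' pipeline.

```python
# Illustrative fusion of a vision modality (global frame feature) with a
# language modality (embeddings of scene-element phrases selected by
# cosine similarity). Names and the top-k choice are assumptions.
import torch
import torch.nn.functional as F

def build_vl_feature(frame_feat, phrase_feats, top_k=5):
    """frame_feat: (dim,) global visual feature; phrase_feats: (n_phrases, dim)
    embeddings of scene-element phrases (objects, relations, activities, ...)."""
    sims = F.cosine_similarity(phrase_feats, frame_feat.unsqueeze(0), dim=-1)
    top = phrase_feats[sims.topk(top_k).indices]        # most relevant elements
    linguistic = top.mean(dim=0)                        # pooled language modality
    return torch.cat([frame_feat, linguistic], dim=-1)  # fused VL feature
```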
There is a growing interest in developing unlearnable examples (UEs) against visual privacy leaks on the Internet. UEs are training samples with invisible but unlearnable noise added, which has been found to prevent unauthorized training of machine learning models. UEs are typically generated via a bilevel optimization framework with a surrogate model to remove (minimize) errors from the original samples, and are then applied to protect the data against unknown target models. However, existing UE generation methods all rely on an ideal assumption called label-consistency, where hackers and protectors are assumed to hold the same label for a given sample. In this work, we propose and promote a more practical label-agnostic setting, where hackers may exploit the protected data quite differently from the protectors. For example, an m-class unlearnable dataset held by the protector may be exploited by the hacker as an n-class dataset. Existing UE generation methods are rendered ineffective in this challenging setting. To tackle this challenge, we present a novel technique called Unlearnable Clusters (UCs) to generate label-agnostic unlearnable examples with cluster-wise perturbations. Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) such as CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains. We empirically verify the effectiveness of our proposed approach under a variety of settings with different datasets, target models, and even commercial platforms such as Microsoft Azure and Baidu PaddlePaddle.
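The bilevel recipe's inner step can be sketched as error-minimizing PGD on the perturbation under an L-infinity budget; this follows the standard UE formulation the abstract references, not the cluster-wise UC generator itself, and the step sizes are assumptions.

```python
# Sketch of one error-minimizing noise update: unlike adversarial attacks,
# the perturbation *descends* the surrogate loss, making samples "too easy"
# to learn from. Budget eps and step alpha are illustrative.
import torch
import torch.nn.functional as F

def error_minimizing_step(model, x, y, delta, eps=8 / 255, alpha=2 / 255):
    """x: batch of images; y: labels; delta: current additive perturbation."""
    delta = delta.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    with torch.no_grad():
        # Descend the loss (opposite sign to adversarial ascent), then project
        # back into the invisible L-infinity ball of radius eps.
        delta = delta - alpha * delta.grad.sign()
        delta = delta.clamp(-eps, eps)
    return delta.detach()
```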
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, to classify DR grading, localize lesion areas, and provide visual explanations; and (ii) DRG-Expert-Interaction, to receive feedback from expert users and improve the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using Wasserstein distance and adversarial-learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach remains robust given a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRiD and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
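As one possible reading of the attention strategy, here is a sketch of attention pooling over lesion-region features so that the most significant lesions dominate the grading feature while the weights remain inspectable; the module size and structure are assumptions.

```python
# Illustrative attention pooling over candidate lesion features; the
# returned weights double as a simple explanation of which regions drove
# the grading decision. Dimensions are assumptions.
import torch
import torch.nn as nn

class LesionAttentionPool(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one attention logit per region

    def forward(self, lesion_feats):
        """lesion_feats: (batch, n_regions, dim) features of candidate lesions."""
        weights = torch.softmax(self.score(lesion_feats), dim=1)
        pooled = (weights * lesion_feats).sum(dim=1)  # grading feature
        return pooled, weights.squeeze(-1)            # weights are explainable
```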
In this work, we propose a new approach that combines data from multiple sensors for reliable obstacle avoidance. The sensors include two depth cameras and a LiDAR arranged so that they can capture the whole 3D area in front of the robot and a 2D slice around it. To fuse the data from these sensors, we first use an external camera as a reference to combine data from the two depth cameras. A projection technique is then introduced to convert the cameras' 3D point cloud data to its 2D correspondence. An obstacle avoidance algorithm is then developed based on the dynamic window approach. A number of experiments have been conducted to evaluate our proposed approach. The results show that the robot can effectively avoid static and dynamic obstacles of different shapes and sizes in different environments.
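The 3D-to-2D projection step might look like the following plain-NumPy sketch, which flattens points inside the robot's height band onto the ground plane and bins them into a laser-scan-like array that a dynamic-window-approach planner can consume; the height thresholds and beam count are illustrative assumptions.

```python
# Illustrative projection of a 3D point cloud to a 2D scan for planning;
# only points between z_min and z_max (the robot's height band) count as
# obstacles. Parameter values are assumptions.
import numpy as np

def project_points_to_scan(points, z_min=0.05, z_max=1.2, n_beams=360):
    """points: (N, 3) array in the robot frame (x forward, y left, z up)."""
    band = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    angles = np.arctan2(band[:, 1], band[:, 0])            # bearing of each point
    ranges = np.hypot(band[:, 0], band[:, 1])              # planar distance
    bins = ((angles + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams
    scan = np.full(n_beams, np.inf)
    np.minimum.at(scan, bins, ranges)                      # nearest return per beam
    return scan
```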
Ultrasound is progressing toward becoming an affordable and versatile solution to medical imaging. With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging, as it requires trained operators to be in close proximity to patients for long periods of time. In this work, we investigate the important yet seldom-studied problem of scan target localization, under the setting of lung ultrasound imaging. We propose a purely vision-based, data-driven method that incorporates learning-based computer vision techniques. We combine a human pose estimation model with a specially designed regression model to predict the lung ultrasound scan targets, and deploy multiview stereo vision to enhance the consistency of 3D target localization. While related works mostly focus on phantom experiments, we collect data from 30 human subjects for testing. Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69)° for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets. Moreover, our approach can serve as a general solution to other types of ultrasound modalities. The code for implementation has been released.
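A hedged sketch of the two-stage idea: keypoints from an off-the-shelf pose estimator feed a small regression head that predicts the scan-target location in image coordinates; the head architecture and keypoint count are assumptions, not the paper's design.

```python
# Illustrative regression head mapping 2D body keypoints (e.g., from a
# COCO-style 17-joint pose estimator) to a scan-target pixel location.
import torch
import torch.nn as nn

class ScanTargetRegressor(nn.Module):
    def __init__(self, n_keypoints=17):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(n_keypoints * 2, 64), nn.ReLU(),
            nn.Linear(64, 2))  # (u, v) target coordinates in the image

    def forward(self, keypoints):
        """keypoints: (batch, n_keypoints, 2) normalized 2D joint positions."""
        return self.head(keypoints.flatten(1))
```

Predicting the target in each camera view and triangulating across views would then give the 3D consistency the abstract attributes to multiview stereo.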
Independent component analysis (ICA) is a blind source separation method to recover source signals of interest from their mixtures. Most existing ICA procedures assume independent sampling. Second-order-statistics-based source separation methods have been developed based on parametric time series models for mixtures of autocorrelated sources. However, second-order-statistics-based methods cannot separate the sources accurately when the sources have temporal autocorrelations with mixed spectra. To address this issue, we propose a new ICA method that estimates the spectral density functions and line spectra of the source signals using cubic splines and indicator functions, respectively. The mixed spectra and the mixing matrix are estimated by maximizing the Whittle likelihood function. We illustrate the performance of the proposed method through simulation experiments and an EEG data application. The numerical results indicate that our approach outperforms existing ICA methods, including the SOBI algorithm. In addition, we investigate the asymptotic behavior of the proposed method.
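For concreteness, here is a small numerical sketch of the Whittle log-likelihood being maximized, evaluated from the periodogram and a candidate spectral density at the Fourier frequencies; the discretization details and function names are our assumptions.

```python
# Whittle log-likelihood sketch: up to constants, for Fourier frequencies
# w_j the log-likelihood is  -sum_j [ log f(w_j) + I(w_j) / f(w_j) ],
# where I is the periodogram and f the candidate spectral density.
import numpy as np

def whittle_loglik(x, spectral_density):
    """x: (n,) real time series; spectral_density: callable f(w), w in (0, pi)."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, n // 2) / n            # Fourier frequencies
    periodogram = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)
    f = spectral_density(freqs)                              # must be positive
    return -np.sum(np.log(f) + periodogram / f)
```

In the method described, this scalar would be maximized jointly over the mixing matrix and the spline/line-spectrum parameters of each source's spectral density.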